IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models

Wang, Chenguang, Liu, Xiao, Song, Dawn

arXiv.org Artificial Intelligence

We introduce a new open information extraction (OIE) benchmark for pre-trained language models (LMs). Recent studies have demonstrated that pre-trained LMs, such as BERT and GPT, may store linguistic and relational knowledge. In particular, LMs are able to answer "fill-in-the-blank" questions when given a pre-defined relation category. Instead of focusing on pre-defined relations, we create an OIE benchmark aiming to fully examine the open relational information present in pre-trained LMs. We accomplish this by turning pre-trained LMs into zero-shot OIE systems. Surprisingly, pre-trained LMs obtain competitive performance on both standard OIE datasets (CaRB and Re-OIE2016) and two new large-scale factual OIE datasets (TAC KBP-OIE and Wikidata-OIE) that we establish via distant supervision. For instance, the zero-shot pre-trained LMs outperform state-of-the-art supervised OIE methods in F1 score on our factual OIE datasets without using any training sets. Our code and datasets are available at https://github.com/cgraywang/IELM.
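The "fill-in-the-blank" probing the abstract refers to can be reproduced with any masked language model. Below is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model choice, prompt template, and entity are illustrative assumptions, not details taken from the paper.

# Minimal sketch of "fill-in-the-blank" relation probing with BERT.
# The prompt and entity below are illustrative, not from the paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Probe a pre-defined relation (here, place of birth) as a cloze question.
prompt = "Albert Einstein was born in [MASK]."
for prediction in fill_mask(prompt, top_k=3):
    print(f"{prediction['token_str']:>12s}  score={prediction['score']:.3f}")

A zero-shot OIE system, by contrast, must recover the relation phrase itself rather than fill a slot for a pre-defined relation, which is the harder setting the benchmark evaluates.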


Language Models are Open Knowledge Graphs

Wang, Chenguang, Liu, Xiao, Song, Dawn

arXiv.org Artificial Intelligence

This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3) without human supervision. Popular KGs (e.g., Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled the language models to improve downstream NLP tasks, e.g., answering questions and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs can be constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora. We demonstrate the quality of the constructed KGs by comparing them to two KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual knowledge that is new relative to existing KGs. Our code and KGs will be made publicly available.
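The single-forward-pass idea can be illustrated with a rough sketch: run the LM once with attention outputs enabled, then score the tokens between a given head and tail entity as candidate relation phrases. The attention-averaging heuristic and the hard-coded entity mentions below are simplifying assumptions for illustration; the paper's actual search procedure over attention matrices is more involved.

# Rough sketch: extract one (head, relation, tail) candidate from a single
# forward pass of a pre-trained LM. Averaging attention over all layers and
# heads is a simplification assumed here for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "Dylan is a songwriter."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Average attention over layers, batch, and heads: (seq_len, seq_len).
attn = torch.stack(outputs.attentions).mean(dim=(0, 1, 2))

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
# Assume "dylan" (head) and "songwriter" (tail) are given entity mentions;
# pick the intermediate token the tail attends to most as the relation.
head_idx, tail_idx = tokens.index("dylan"), tokens.index("songwriter")
relation_idx = max(range(head_idx + 1, tail_idx),
                   key=lambda i: attn[tail_idx, i].item())
print((tokens[head_idx], tokens[relation_idx], tokens[tail_idx]))

On this toy sentence the sketch yields a triple like (dylan, is, songwriter); mapping such open triples onto a fixed schema such as Wikidata's is a separate step in the paper's pipeline.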


Facebook's first blind engineer is revolutionizing social media as we know it

#artificialintelligence

Halfway through Matt King's presentation, the screen goes dark. It's the kind of glitch that might make a man sweat in front of the audience. But this is no glitch. King has done it deliberately to bring us into his world, however disorienting it might be for the rest of the room. "I'm going to put it into a state that's more like how I operate," King says with just a hint of mischief.

